3 research outputs found

    Source Camera Verification from Strongly Stabilized Videos

    Image stabilization performed during imaging and/or post-processing poses one of the most significant challenges to photo-response non-uniformity (PRNU) based source camera attribution from videos. When performed digitally, stabilization involves cropping, warping, and inpainting of video frames to eliminate unwanted camera motion. Hence, successful attribution requires blindly inverting these transformations. To address this challenge, we introduce a source camera verification method for videos that takes into account the spatially variant nature of stabilization transformations and allows a larger degree of freedom in their search. Our method identifies transformations at a sub-frame level, incorporates a number of constraints to validate their correctness, and offers computational flexibility in the search for the correct transformation. The method also adopts a holistic approach to countering the disruptive effects of other video generation steps, such as video coding and downsizing, for more reliable attribution. Tests performed on one public and two custom datasets show that, depending on computation load, the proposed method is able to verify the source of 23-30% of all videos that underwent stronger stabilization without a significant impact on false attribution.
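
    The search over inverse stabilization transformations can be illustrated with a small sketch. The code below is not the paper's implementation; it is a minimal Python illustration of grid-searching candidate inverse transforms (scale and rotation, with shifts absorbed by FFT-based correlation) for one block of a frame's noise residual, scoring each candidate against the corresponding camera fingerprint block with a peak-to-correlation-energy (PCE) statistic. All function names, parameter grids, and the block-wise setup are illustrative assumptions.

```python
# Minimal sketch (not the authors' method) of searching inverse stabilization
# transforms for PRNU-based verification. Assumes a known camera fingerprint
# block and a noise residual block of the same size.
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import rotate, zoom


def pce(residual, fingerprint):
    """Simplified peak-to-correlation-energy between two same-sized blocks."""
    c = np.real(ifft2(fft2(residual) * np.conj(fft2(fingerprint))))
    peak = c.max()
    py, px = np.unravel_index(c.argmax(), c.shape)
    mask = np.ones_like(c, dtype=bool)
    mask[max(py - 5, 0):py + 6, max(px - 5, 0):px + 6] = False  # exclude peak region
    return peak ** 2 / np.mean(c[mask] ** 2)


def best_inverse_transform(residual_block, fingerprint_block,
                           scales=(0.98, 1.0, 1.02),
                           angles=(-1.0, 0.0, 1.0)):
    """Grid-search candidate inverse transforms (scale, rotation) for one block."""
    best_score, best_params = -np.inf, None
    h, w = fingerprint_block.shape
    for s in scales:
        for a in angles:
            cand = zoom(residual_block, s, order=1)
            cand = rotate(cand, a, reshape=False, order=1)
            cand = cand[:h, :w]  # crop back, then pad if the scaled block shrank
            cand = np.pad(cand, ((0, h - cand.shape[0]), (0, w - cand.shape[1])))
            score = pce(cand, fingerprint_block)
            if score > best_score:
                best_score, best_params = score, (s, a)
    # Translations are handled implicitly by the circular FFT correlation in pce().
    return best_score, best_params
```

    A full verification pipeline in this spirit would repeat the block-wise search across many blocks and frames, validate the resulting transformations against consistency constraints, and accept the video only when enough blocks exceed a PCE threshold.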

    Video Source Characterization Using Encoding and Encapsulation Characteristics

    We introduce a new method for camera-model identification. Our approach combines two independent aspects of video file generation: video coding and media data encapsulation. To this end, a joint representation of the overall file metadata is developed and used in conjunction with a two-level hierarchical classification method. At the first level, our method groups videos into metaclasses based on several abstractions that represent high-level structural properties of file metadata. This is followed by a more nuanced classification of the classes that comprise each metaclass. The method is evaluated on more than 20K videos obtained by combining four public video datasets. Tests show that a balanced accuracy of 91% is achieved in correctly identifying the class of a video among 119 video classes, an improvement of 6.5% over the conventional approach based on video file encapsulation characteristics. Furthermore, we investigate a setting relevant to forensic file recovery operations, where file metadata cannot be located or is missing but video data is partially available. By estimating a partial list of encoding parameters from the coded video data, we demonstrate that an accuracy of 57% can be achieved in camera-model identification in the absence of any other file metadata.
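
    To make the two-level classification concrete, the sketch below groups videos into metaclasses by a coarse structural signature of their file metadata (here, the ordered sequence of top-level container boxes, an assumed stand-in for the paper's abstractions) and then trains a separate classifier per metaclass on a joint encoding/encapsulation feature dictionary. The feature names, the metaclass key, and the choice of random-forest classifiers are illustrative assumptions rather than the paper's exact representation.

```python
# Minimal sketch of a two-level hierarchical classifier over video file
# metadata. Metaclass key and features are illustrative assumptions.
from collections import defaultdict
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier


def metaclass_key(metadata):
    """Coarse structural abstraction, e.g. the ordered top-level box types."""
    return tuple(metadata.get("top_level_boxes", ()))


def train_hierarchy(samples):
    """samples: list of (metadata_dict, feature_dict, class_label) triples."""
    groups = defaultdict(list)
    for meta, feats, label in samples:
        groups[metaclass_key(meta)].append((feats, label))

    models = {}
    for key, items in groups.items():
        feats, labels = zip(*items)
        vec = DictVectorizer(sparse=False)      # joint coding/encapsulation features
        X = vec.fit_transform(list(feats))
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X, list(labels))                # second-level, per-metaclass classifier
        models[key] = (vec, clf)
    return models


def predict(models, metadata, features):
    """Route a video to its metaclass, then classify within it."""
    key = metaclass_key(metadata)
    if key not in models:                       # structural signature unseen in training
        return None
    vec, clf = models[key]
    return clf.predict(vec.transform([features]))[0]
```

    At test time a video is first routed to its metaclass by the same structural signature and then classified by that metaclass's model; videos whose signature was never seen in training would require a fallback strategy not shown here.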